7 research outputs found

    Exploring Robot Teleoperation in Virtual Reality

    This thesis presents research on VR-based robot teleoperation, focusing on remote environment visualisation in virtual reality, the effects of remote environment reconstruction scale on the human operator's ability to control the robot, and the human operator's visual attention patterns when teleoperating a robot from virtual reality. A VR-based robot teleoperation framework was developed. It is compatible with various robotic systems and cameras, allowing teleoperation and supervised control of any ROS-compatible robot and visualisation of the environment through any ROS-compatible RGB and RGBD cameras. The framework includes mapping, segmentation, tactile exploration, and non-physically-demanding VR interface navigation and controls through any Unity-compatible VR headset and controllers or haptic devices. Point clouds are a common way to visualise remote environments in 3D, but they often contain distortions and occlusions, making it difficult to represent objects' textures accurately. If objects are inaccurately represented in the VR reconstruction, this can lead to poor decision-making during teleoperation. A study using an end-effector-mounted RGBD camera with OctoMap mapping of the remote environment was conducted to explore the remote environment with fewer point cloud distortions and occlusions while using relatively little bandwidth. Additionally, a tactile exploration study proposed a novel method for visually presenting information about objects' materials in the VR interface, to improve the operator's decision-making and address the challenges of point cloud visualisation. Two studies were conducted to understand the effect of dynamic virtual world scaling on teleoperation flow. The first study investigated rate mode control with constant and variable mapping of the operator's joystick position to the speed (rate) of the robot's end-effector, where the variable mapping depended on the virtual world scale.
The results showed that variable mapping allowed participants to teleoperate the robot more effectively, but at the cost of increased perceived workload. The second study examined how operators used the virtual world scale in supervised control, comparing the scales participants chose at the beginning and end of a three-day experiment. The results showed that, as operators became more proficient at the task, they as a group used a different virtual world scale, and that participants' prior video gaming experience also affected the scale they chose. Similarly, the visual attention study investigated how the human operator's visual attention changes as they become better at teleoperating a robot using the framework. The results revealed the most important objects in the VR-reconstructed remote environment, as indicated by operators' visual attention patterns, and showed how their visual priorities shifted as they became better at teleoperating the robot. The study also demonstrated that operators' prior video gaming experience affects their ability to teleoperate the robot and their visual attention behaviours.

    Virtual Reality based Telerobotics Framework with Depth Cameras

    This work describes a virtual reality (VR) based robot teleoperation framework that relies on scene visualization from depth cameras and implements human-robot and human-scene interaction gestures. We suggest that mounting a camera on a slave robot's end-effector (an in-hand camera) allows the operator to achieve better visualization of the remote scene and improves task performance. We experimentally compared the operator's ability to understand the remote environment in different visualization modes: a single external static camera, an in-hand camera, in-hand and external static cameras combined, and an in-hand camera with OctoMap occupancy mapping. The last option provided the operator with the best understanding of the remote environment while requiring relatively little communication bandwidth. Consequently, we propose grasping methods compatible with VR-based teleoperation using the in-hand camera. Video demonstration: https://youtu.be/3vZaEykMS_E
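    The bandwidth saving from occupancy mapping comes from replacing a dense streamed point cloud with a coarse grid of occupied cells. A minimal Python sketch of that idea follows; the framework itself uses OctoMap's octree, so the flat voxel set, the voxel size, and the data layout here are illustrative assumptions only:

```python
def voxelize(points, voxel_size=0.05):
    """Quantise 3D points (metres) to a set of occupied voxel indices.

    Collapsing many points per voxel is what shrinks the data that must
    be streamed to the VR client. OctoMap does this with an octree and
    per-cell occupancy probabilities; a flat set of voxel keys is an
    illustrative stand-in for the same compression effect.
    """
    occupied = set()
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        occupied.add(key)
    return occupied

# 1000 points densely sampled along a 1 m line collapse to a few voxels.
points = [(i * 0.001, 0.0, 0.0) for i in range(1000)]
voxels = voxelize(points)
print(len(points), "points ->", len(voxels), "occupied voxels")
```

    Only the occupied-voxel keys (plus updates as the in-hand camera moves) need to cross the network, which is why the mapped mode in the comparison above required relatively little bandwidth.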

    Workspace Scaling and Rate Mode Control for Virtual Reality based Robot Teleoperation

    We explored rate mode control for virtual reality (VR) based robot teleoperation with constant and variable mapping of the human operator's joystick position to the speed (rate) of the robot's end-effector. The variable mapping depended on the ratio of the scale of the virtual reconstruction of the remote environment to the scale of the real remote environment. We demonstrated how rate mode control and variable scaling based on the VR reconstruction scale can be used efficiently for seated VR-based robot teleoperation in which the operator's arms are supported to reduce tiredness. An experimental study with five human participants demonstrated that variable mapping allowed participants to teleoperate the robot more effectively by adjusting the VR visual scale, albeit at the cost of increased perceived workload.
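    The constant versus variable mapping described above can be written as a single rate law. A hedged Python sketch follows; the gain value and the exact form of the scale dependence (simple division by the reconstruction scale) are assumptions for illustration, not the study's calibrated parameters:

```python
def end_effector_rate(joystick, vr_scale, gain=0.1, variable=True):
    """Map joystick deflection in [-1, 1] to end-effector speed (m/s).

    Constant mapping ignores the VR scale. Variable mapping divides by
    the reconstruction scale, so when the operator zooms in on the
    virtual scene (vr_scale > 1) the robot moves proportionally more
    slowly, giving finer control; zooming out speeds it up. Dividing by
    vr_scale is an illustrative assumption about the mapping's form.
    """
    joystick = max(-1.0, min(1.0, joystick))  # clamp deflection
    rate = gain * joystick
    if variable:
        rate /= vr_scale
    return rate

# Full deflection at 2x zoom: variable mapping halves the speed.
print(end_effector_rate(1.0, vr_scale=2.0))                  # variable
print(end_effector_rate(1.0, vr_scale=2.0, variable=False))  # constant
```

    Under this form, zooming in to inspect a grasp automatically slows the end-effector, which is consistent with the reported trade-off: finer, more effective control at the cost of the operator actively managing the scale.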

    Tactile Classification of Object Materials for Virtual Reality based Robot Teleoperation

    This work presents a method for tactile classification of materials for virtual reality (VR) based robot teleoperation. In our system, a human operator uses a remotely controlled robot manipulator with an optical-fibre-based tactile and proximity sensor to scan the surfaces of objects in a remote environment. Tactile and proximity data and the robot's end-effector state feedback are used to classify objects' materials, which are then visualised for each object in the VR reconstruction of the remote environment. Machine learning techniques, namely random forests, convolutional neural networks, and multi-modal convolutional neural networks, were used for material classification. The proposed system and methods were tested with five different materials, and classification accuracy of 90% or more was achieved. The material classification results were successfully exploited to visualise the remote scene in the VR interface, providing more information to the human operator.

    A Suite of Robotic Solutions for Nuclear Waste Decommissioning

    Dealing safely with nuclear waste is an imperative for the nuclear industry. Increasingly, robots are being developed to carry out complex tasks such as perceiving, grasping, cutting, and manipulating waste. Radioactive material can be sorted, and either stored safely or disposed of appropriately, entirely through the actions of remotely controlled robots. Radiological characterisation is also critical during the decommissioning of nuclear facilities. It involves the detection and labelling of radiation levels, waste materials, and contaminants, as well as determining other related parameters (e.g., thermal and chemical), with the data visualised as 3D scene models. This paper overviews work by researchers at the QMUL Centre for Advanced Robotics (ARQ), a partner in the UK EPSRC National Centre for Nuclear Robotics (NCNR), a consortium working on the development of radiation-hardened robots fit to handle nuclear waste. Three areas of nuclear-related research are covered here: human–robot interfaces for remote operations, sensor delivery, and intelligent robotic manipulation.